Web Survey Bibliography
With the ever-increasing difficulty of conducting RDD interviews in the United States, this study evaluated the effectiveness of offering cash incentives to participants. Incentives have historically had some success, although that success depends heavily on the population, the type of incentive (cash versus a lottery or other gift), the timing of the incentive (before or after data collection), and the mode of data collection. Results from two omnibus RDD studies that offered cash incentives, one national and one state-wide, were examined to determine the impact of incentives on response rates, cooperation rates, the demographic characteristics of respondents, and responses to substantive questions. In addition to offering incentives to participants, the state-wide study also offered incentives to the interviewers. It was hypothesized that offering incentives to interviewers would have a positive impact on overall response rates, cooperation rates, and data quality.
Both studies were designed as randomized control trials. The national study used an RDD sample that was randomly assigned an incentive or non-incentive status, with approximately half the sample in each condition. If flagged for an incentive, the potential respondent was offered $10 upon completion of the interview; a check for $10 was mailed to the respondent after the interview was completed. If not flagged, no incentive was offered. The state-wide survey was slightly more complicated in that two levels of incentives were offered. The first level was similar to the national study: the RDD sample was again randomly assigned an incentive or non-incentive status, with approximately half the sample in each condition, and participants flagged for an incentive were offered $10 cash after completing the survey. The second level of incentive was offered to the interviewers. During every odd week of the data collection period, interviewers were offered $10 cash for each interview they successfully completed above and beyond their minimum required productivity targets. Interviewers were required to maintain a minimum productivity pace whether or not an incentive was offered.
These minimum productivity criteria included completing a minimum number of interviews per hour, maintaining the average expected number of refusals per hour, maintaining the required dialing pace, abiding by the facility's attendance requirements, and maintaining high data collection quality standards. The criteria were held constant to permit evaluation of the effect of introducing an interviewer incentive on response rates. The state-wide study was thus essentially a two-by-two randomized control trial with four possible incentive groups: 1) respondent incentive only, 2) interviewer incentive only, 3) both respondent and interviewer incentives, and 4) no incentive at all. Overall, the findings were somewhat surprising: the incentives improved response and cooperation rates among certain demographic groups only, with no effect on other groups. Overall response rates improved slightly for respondents aged 34 and under, and in particular for those aged 24 or younger, who are now among the most difficult populations to reach by telephone, mainly because of the growth in cell-phone-only use. Interestingly, the impact on interviewer performance was negligible. Based on the findings of these two experiments, offering incentives only to certain demographic groups (the younger age groups) may be the best use of cash incentives.
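To illustrate the two-by-two design described above, the sketch below simulates random assignment into the four incentive groups. This is a simplification under stated assumptions: both factors are treated as independent per-case 50/50 randomizations (in the actual study the interviewer incentive varied by week rather than per case), and the sample size and seed are invented for the example.

```python
import random

random.seed(42)  # assumed seed, for a reproducible illustration only

def assign_groups(n_cases):
    """Cross two binary incentive flags at random, yielding four groups."""
    groups = []
    for _ in range(n_cases):
        respondent_incentive = random.random() < 0.5   # ~half the sample flagged
        interviewer_incentive = random.random() < 0.5  # simplification of the odd-week scheme
        if respondent_incentive and interviewer_incentive:
            groups.append("respondent + interviewer incentive")
        elif respondent_incentive:
            groups.append("respondent incentive only")
        elif interviewer_incentive:
            groups.append("interviewer incentive only")
        else:
            groups.append("no incentive")
    return groups

# Tally the four cells; with 10,000 cases each cell holds roughly a quarter.
counts = {}
for g in assign_groups(10000):
    counts[g] = counts.get(g, 0) + 1
```

Comparing response and cooperation rates across the four cells is what lets the design separate the respondent-incentive effect from the interviewer-incentive effect.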
Conference homepage (abstract)
Web survey bibliography (317)
- Overview: Online Surveys; 2017; Vehovar, V.; Lozar Manfreda, K.
- Respondent mode choice in a smartphone survey; 2017; Conrad, F. G.; Schober, M. F.; Antoun, C.; Yan, H. Y.; Hupp, A.; Johnston, M.; Ehlen, P.; Vickers, L...
- Collecting Data from mHealth Users via SMS Surveys: A Case Study in Kenya; 2016; Johnson, D.
- Electronic and paper based data collection methods in library and information science research: A comparative...; 2016; Tella, A.
- Stable Relationships, Stable Participation? The Effects of Partnership Dissolution and Changes in Relationship...; 2016; Mueller, B.; Castiglioni, L.
- Identifying Pertinent Variables for Nonresponse Follow-Up Surveys. Lessons Learned from 4 Cases in Switzerland...; 2016; Vandenplas, C.; Joye, D.; Staehli, M. E.; Pollien, A.
- The 2013 Census Test: Piloting Methods to Reduce 2020 Census Costs; 2016; Walejko, G. K.; Miller, P. V.
- The Validity of Surveys: Online and Offline; 2016; Wiersma, W.
- Methods can matter: Where Web surveys produce different results than phone interviews; 2016; Keeter, S.
- Do Polls Still Work If People Don't Answer Their Phones?; 2016; Edwards-Levy, A.; Jackson, N. M.
- HUFFPOLLSTER: Why Reaching Latinos Is A Challenge For Pollsters; 2016; Jackson, N. M.; Edwards-Levy, A.; Velencia, J.
- Comprehension and engagement in survey interviews with virtual agents; 2016; Conrad, F. G.; Schober, M. F.; Jans, M.; Orlowski, R. A.; Nielsen, D.; Levenstein, R. M.
- An Overview of Mobile CATI Issues in Europe; 2015; Slavec, A.; Toninelli, D.
- Using Mobile Phones for High-Frequency Data Collection; 2015; Azevedo, J. P.; Ballivian, A.; Durbin, W.
- Mixed mode surveys; 2015; Burton, J.
- Two Are Better Than One: The Use of a Mixed-Mode Data Collection to Improve the Electoral Forecast; 2014; de Rada, V. D.; Pasadas del Amo, S.
- The impact of contact effort on mode-specific selection and measurement bias; 2014; Schouten, B.; van der Laan, J.; Cobben, F.
- How much is shorter CAWI questionnaire VS CATI questionnaire?; 2014; Bartoli, B.
- Advantages of a global multimodal print & digital readership survey; 2013; Cour, N.; Saint-Joanis, G.
- Relative Mode Effects on Data Quality in Mixed-Mode Surveys by an Instrumental Variable; 2013; Vannieuwenhuyze, J. T. A.; Revilla, M.
- A report on the Confirmit Market Research Software Survey 2013; 2013; Macer, T.; Wilson, S.
- Mode effect analysis and adjustment in a split-sample mixed-mode Web/CATI survey; 2013; Kolenikov, S.; Kennedy, C.
- Evaluating the left‐right dimension: Category Selection Probing conducted in an online access...; 2013; Huefken, V.
- Methodological, legal and technical perspectives on the feasibility of web survey paradata in German...; 2013; Sattelberger, S.
- Impact of mode design on reliability in longitudinal data; 2013; Cernat, A.
- Exploring patterns of academic usage: A Google Scholar based study of ESS, EVS, WVS and ISSP academic...; 2013; Malnar, B.
- Web questionnaires in official population surveys: Do's and don'ts First experiments and impacts...; 2013; Blanke, K.
- Mode effects in Labour Force Surveys - do they really matter?; 2013; Koerner, T.
- Measuring the same concepts in several modes in the "BIBB/BAuA-Employee-Survey 2011/12"; 2013; Gensicke, M.; Tschersich, N.; Hartmann, J.
- What works? Getting the General Population To Go Online in a Mixed Mode Local Health Survey; 2013; Frigault, L.-R.; Azzou, S. A. K.; Molloy, E. J. K.; Ammarguellat, F.; Couture, M.; Gratton, J.
- Using Technology to Conduct Questionnaire Evaluations with Hard to Reach Populations; 2013; Ridolfo, H.; Ott, K.
- Mode Effects in a National Establishment Survey; 2013; Daley, K.; Phillips, B. T.
- Evaluating the Effect of a Non-Monetary Incentive in a Nationally Representative Mixed-Mode Establishment...; 2013; Sengupta, M.; Harris-Kojetin, L.; Hobbs, M.; Greene, A.
- Survey Reminder Method Experiment: An Examination of Cost Efficiency and Reminder Mode Salience in the...; 2013; Anderson, M.; Rogers, B.; CyBulski, K.; Hall, J. W.; Alderks, C. E.; Milazzo-Sayre, L.
- Experiences from a probability-based Internet panel: Sample, recruitment and participation; 2013; Scherpenzeel, A.
- An Evaluation of Internet Versus Paper-based Methods for Public Participation Geographic Information...; 2012; Pocewicz, A.; Nielsen-Pincus, M.; Brown, G.; Schnitzer, R.
- Using paradata to explore item-level response times in surveys; 2012; Couper, M. P.; Kreuter, F.
- Specialized Tools for Measuring Past Events; 2012; Belli, R. F.
- Modes of Data Collection; 2012; Tourangeau, R.
- Mode and non-response effects and their treatment; 2012; Chrysanthopoulos, S.; Georgostathi, A.
- “I think I know what you did last summer” Improving data quality in panel surveys; 2012; Lugtig, P. J.
- Using Text-to-Speech (TTS) for Audio-CASI; 2012; Couper, M. P.; Kirgis, N.; Buageila, S.; Berglund, P.
- Does Mode Matter? Initial Evidence from the German Longitudinal Election Study (GLES); 2012; Blumenstiel, J. E.; Rossmann, J.
- The Representativity of Web Surveys of the General Population compared to Traditional Modes and Mixed...; 2012; Klausch, L. T.; Schouten, B.; Hox, J.
- Effects of speeding on satisficing in Mixed-Mode Surveys; 2011; Bathelt, S.; Bauknecht, J.
- Web based CATI on Amazon Elastic Compute Cloud and VirtualBox using queXS; 2011; Zammit, A.
- Web/Cloud Based CATI Using queXS; 2011; Zammit, A.
- When Referring to Mode, Is Expressed Preference the Same as Reality?; 2011; Denk, K.
- Three Eras of Survey Research; 2011; Groves, R. M.
- Testing a single mode vs a mixed mode design; 2011; Laaksonen, S.